posted 08-07-2010 02:33 PM
Well said Skip.The Federal Examiners Handbook has a description of and definition for a Screening Comparison Question. To me it seems evident that someone or some people thought about the dilemmas in CQs for screening. The description is rather general and broad enough to allow for a reasonable and thoughtful application of the basic psychological and physiological principles that make the polygraph work.
Cognitive dissonance is a term from cognitive behavioral psychology, which assumes that behavior is what matters most (because it is the most observable, and has the greatest impact), and that behavior is mediated by emotion, which is regulated by cognition. Cognitive behavioral psychology does not attempt to put all of our professional eggs into one basket (emotion and fear). Instead, it includes cognition, emotion, and behavioral learning - which is why cognitive behavioral psychologists chose the metaphorical word "dissonance" in an attempt to avoid unintended emphasis on the language of emotion (which would only narrow the focus of our attention and knowledge).
I have been using and teaching a very simple explanation for cognitive dissonance, which I learned earlier this year - cognitive dissonance is the difference in your brain and your body between what you know you did and what you are saying now. This seems to me to be a parsimonious and efficient explanation for a variety of known polygraph phenomena - PLCs, DLCs, RQs, polygraph accuracy with psychopaths, exclusive, non-exclusive, and screening or hypothetical CQs.
We seem to have, at times, painted ourselves into a corner with so many fancy and unproven rules that nobody can remember them all. To an outsider looking in at the polygraph bidness, for all of our "objectivity" (read: opinionated criticism of other people's failure to follow the rules I like) it would start to seem that there might not even be such a thing as a "valid" test. There are so many differences of opinion that some form of fault can be found with everything.
Which is why we really need to emphasize a testing approach that is based in evidence and data, not just opinion. The joke seems to be consistently on the smart people who make the mistake of thinking their fancy theorizing and opinionating is helpful when there ain't no data or when the data say otherwise. It is a simple expression of narcissism to think that someone's opinion is more helpful than data and evidence. Yet it happens all the time. I have seen a well known "expert" in the polygraph profession stand up in international conferences and say outrageous things like "I disagree with using DLCs," despite the data that seem to suggest they work just as well as PLCs, and perhaps better in some circumstances.
Just look at time-bars, symptomatics, multi-facet question and decision approaches, and question rotation - and you will see strong opinions about the value and importance of these things - without evidence of support, and some evidence that some of them might actually cause more damage than improvement to test accuracy (and utility).
What the evidence does seem to say is that the CQT works, and that all kinds of CQs seem to work.
Cognitive dissonance seems to be a nice parsimonious framework for understanding and discussing why they work.
.02
r
------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)